
Fix BTD/Scrape Flush Count with Filters #4657

Merged: 2 commits into BLAST-WarpX:development from fix-btd-scrape-offset-cnt on Feb 2, 2024

Conversation

@ax3l (Member) commented Jan 31, 2024

Move the counting of already flushed particles for writers that call the I/O backends multiple times per data set, e.g., BTD and boundary scraping, into the I/O backend.

Currently (until #4420), filtering is done as the first step in the I/O backends, so the previous count, taken outside of the I/O backends, over-counted particles that might still get filtered out.

Fixes a regression introduced in #4570.

Fixes #4604 (Backtransform diagnostics crash when filters remove all particles).
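For readers skimming the diff, here is a minimal sketch (hypothetical names, not the actual WarpX API) of the counting change: the running flush offset is advanced inside the I/O backend after filtering, by the number of particles actually written, instead of being pre-computed from the raw buffer size outside of it.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Particle { double x = 0.0; };

// Backend-side flush: filter first, then advance the shared offset by the
// number of particles that were actually written for this buffer.
void FlushBuffer (const std::vector<Particle>& buffer,
                  double x_min,                              // assumed filter criterion
                  std::uint64_t& particles_flushed_already)  // running offset, passed by reference
{
    const auto written = static_cast<std::uint64_t>(
        std::count_if(buffer.begin(), buffer.end(),
                      [x_min](const Particle& p) { return p.x >= x_min; }));
    // ... write `written` particles starting at offset `particles_flushed_already` ...
    particles_flushed_already += written;   // previously: += buffer.size(), which over-counted
}
```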

@ax3l added labels on Jan 31, 2024: bug (Something isn't working), bug: affects latest release (Bug also exists in latest release version), component: diagnostics (all types of outputs), component: boundary (PML, embedded boundaries, et al.), component: boosted frame (boosted frame components & logic)
@@ -399,8 +399,6 @@ private:
lab-frame data. */
void InitializeParticleFunctors () override;

/** Update total number of particles flushed for all species for ith snapshot */
void UpdateTotalParticlesFlushed(int i_buffer);
@ax3l (Member Author):

Removed and delegated to the I/O backend, which currently does the filtering and thus knows how many particles are truly written.

Reviewer (Member):

Is this also done for plotfile?

Reviewer (Member):

Ah, I see: it's not needed for plotfile, because we just merge headers and interleave data. This was needed mainly for openPMD.
All good!
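To make that distinction concrete, here is a rough sketch (hypothetical types, neither the openPMD-api nor the AMReX plotfile code): an openPMD-style backend appends every BTD buffer into one contiguous per-snapshot array and therefore needs a running write offset, whereas a plotfile-style backend writes self-contained chunks and only merges headers afterwards.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// openPMD-style: one contiguous array per snapshot, indexed by a running offset.
struct ContiguousSnapshot {
    std::vector<double> positions;
    void append (const std::vector<double>& chunk, std::uint64_t offset)
    {
        const std::size_t needed = offset + chunk.size();
        if (positions.size() < needed) { positions.resize(needed); }
        std::copy(chunk.begin(), chunk.end(),
                  positions.begin() + static_cast<std::ptrdiff_t>(offset));
    }
};

// plotfile-style: each flush is its own self-contained chunk; no offset
// bookkeeping is needed because headers are merged afterwards.
struct ChunkedSnapshot {
    std::vector<std::vector<double>> chunks;
    void append (const std::vector<double>& chunk) { chunks.push_back(chunk); }
};
```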

@ax3l (Member Author) commented Jan 31, 2024:

cc @roelof-groenewald just FYI for scraping with filters

@ax3l force-pushed the fix-btd-scrape-offset-cnt branch 4 times, most recently from 081e7c1 to 140ba0b (February 1, 2024 08:31)

@ax3l force-pushed the fix-btd-scrape-offset-cnt branch 2 times, most recently from 43883bb to a44f743 (February 1, 2024 17:51)
@ax3l force-pushed the fix-btd-scrape-offset-cnt branch from a44f743 to 39d510f (February 1, 2024 18:00)
@guj (Contributor) commented Feb 1, 2024:

Just checked, the input file from issue #4604 works!

@guj (Contributor) commented Feb 1, 2024:

I noticed that the order of inputs is shuffled in several functions. Why is that?

@ax3l (Member Author) commented Feb 1, 2024:

> I noticed that the order of inputs is shuffled in several functions. Why is that?

I pass the offsets as references now, so I cannot use default arguments for them anymore.

@guj (Contributor) commented Feb 1, 2024:

> > I noticed that the order of inputs is shuffled in several functions. Why is that?
>
> I pass the offsets as references now, so I cannot use default arguments for them anymore.

Yes, I saw that. But the order of the inputs is different: for example, totalParticlesFlushedAlready was moved a few lines up in FlushFormat.H. Just curious.
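As a side note for readers, here is a minimal sketch (not the actual WarpX signatures) of the constraint behind the reordering: a non-const reference parameter cannot usefully carry a default value, and C++ requires all defaulted parameters to come after the non-defaulted ones, so the offset has to move forward in the argument list.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical flush routine, for illustration only: the in/out reference has
// no default, so it must precede the parameters that keep their defaults.
void FlushParticles (std::uint64_t& particles_flushed_already,  // in/out counter, no default possible
                     std::uint64_t num_written,
                     bool is_btd = false)                        // defaulted parameters must trail
{
    particles_flushed_already += num_written;
    std::cout << (is_btd ? "BTD" : "regular")
              << " flush, total so far: " << particles_flushed_already << "\n";
}

int main ()
{
    std::uint64_t flushed = 0;          // accumulated across multiple backend calls
    FlushParticles(flushed, 100);       // first buffer
    FlushParticles(flushed, 50, true);  // BTD buffer
    return 0;
}
```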

ax3l added 2 commits February 1, 2024 13:39
Move the counting of already flushed particles for writers that call the
I/O backends multiple times per data set, e.g., BTD and boundary scraping,
into the I/O backend.

Currently, filtering is done as the first step in I/O backends and thus
the previous count outside of the I/O backends was over-counting
particles that might still get filtered out.

The offset should be a `long`: the overflow risk is very high for a pure `int`.
Also, the counter is `unsigned`, so use `unsigned long` for now.
This leaves less state that we can forget in checkpoint-restart and that we
have to transfer across API boundaries.
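A small, self-contained illustration (assumed numbers, not WarpX code) of the headroom argument in this commit message: counts accumulated across many BTD buffers can exceed what a 32-bit `int` holds, while an unsigned 64-bit offset does not overflow here.

```cpp
#include <climits>
#include <cstdint>
#include <iostream>

int main ()
{
    std::uint64_t offset = 0;
    const std::uint64_t particles_per_flush = 500'000'000;  // 5e8 per buffer (assumed)
    for (int buffer = 0; buffer < 10; ++buffer) {
        offset += particles_per_flush;                       // 5e9 total, safe in 64 bits
    }
    std::cout << "accumulated offset: " << offset << "\n";
    std::cout << "INT_MAX:            " << INT_MAX << "\n";  // ~2.1e9, already exceeded
    return 0;
}
```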
@ax3l force-pushed the fix-btd-scrape-offset-cnt branch from 39d510f to 2483724 (February 1, 2024 22:17)
@ax3l requested a review from atmyers (February 1, 2024 22:21)
@ax3l (Member Author) commented Feb 1, 2024:

Simplified it further to get rid of state that we would forget during checkpoint-restart :) There was too much back and forth over the frontend-backend APIs.

WARPX_ALWAYS_ASSERT_WITH_MESSAGE(
    std::rename(recent_ParticleHdrFilename.c_str(), snapshot_ParticleHdrFilename.c_str()) == 0,
    std::string("Renaming ").append(recent_ParticleHdrFilename).append(" to ").append(snapshot_ParticleHdrFilename).append(" has failed"));
if (!amrex::FileExists(snapshot_ParticleHdrFilename)) {
@ax3l (Member Author), Feb 1, 2024:

@RevathiJambunathan @atmyers highlighting this: I want to remove m_totalParticles_flushed_already because it is brittle state.

Let me know if you would add another check for this line.

Reviewer (Member):

I think this is good!

@RevathiJambunathan (Member):
Looks good to me! @ax3l :) Thanks for this PR

@RemiLehe enabled auto-merge (squash) on February 2, 2024 00:28
@RemiLehe merged commit 1093f61 into BLAST-WarpX:development on Feb 2, 2024; 41 checks passed
@ax3l deleted the fix-btd-scrape-offset-cnt branch on February 2, 2024 02:08